All the material presented here, to the extent it is original, is available under CC-BY-SA.
09:00-09:45 Introduction and set-up
10:00-11:30 Visualization and mapping with coordinate reference systems
11:30-12:00 Discussion
13:00-14:00 Visualization and mapping with coordinate reference systems (exercises)
14:00-14:15 #30DayMapChallenge https://30daymapchallenge.com/ https://fosstodon.org/web/tags/30DayMapChallenge https://twitter.com/search?q=%2330DayMapChallenge
14:30-15:30 Finding systematic patterns in variable/geographic space
09:00-09:45 Jakub Nowosad (zoom) Q&A Finding systematic patterns in geographic space
10:00-11:00 Finding systematic patterns in variable/geographic space (exercises)
11:00-12:00 External libraries, links to GRASS GIS, raster/terra
13:00-13:30 Geodemographics
13:30-14:30 Links to GIS, raster/terra, geodemographics (exercises)
14:45-15:45 Spatio-temporal cubes, jurisdictional change
09:00-10:00 Spatio-temporal cubes, jurisdictional change (exercises)
10:15-11:15 Global and local spatial autocorrelation
11:15-12:00 Data puzzles (support)
13:00-14:15 Global and local spatial autocorrelation (exercises)
14:30-15:30 Residual autocorrelation (exercises)
09:00-10:15 Spatial autocorrelation and machine learning
10:30-11:30 Point pattern analysis
11:30-12:00 Point pattern analysis (exercises)
13:00-14:30 Transport, networks, transmission
14:45-15:45 Transport, networks, transmission (exercises)
Depending on the number of participants wishing to present project outlines (about 25 minutes each: 10-15 minutes presentation, 10-15 minutes discussion), the number of slots may vary.
09:00-12:00 Presentation of up to six project outlines, with 20 min. break.
12:30-15:30 Presentation of up to six project outlines, with 20 min. break.
15:30-16:00 Round-up and feedback
Auditorium M is larger than the room initially assigned, and has been upgraded recently. We’ll find out in practice how to get into this room (Door B, up the stairs, turn left); in principle all teaching rooms are access-card-only, so those of us with cards will need to help others. Canvas messages could be used to contact someone inside if you get stuck outside a locked door (the Canvas app can be set up to issue notifications when a message is received). Because anyone with a card can access the room, it cannot be physically locked, so we could agree among ourselves on who stays in the room during each half of the lunch break, to avoid packing up laptops. We’ll have to learn to live with this; upgraded rooms have both advantages and disadvantages.
We can probably stay in one part of the room; as far as I understand, the active camera is that facing the front. Microphones need to be used to be heard when streaming is activated. The streaming times are set at 09:00-12:00 and 13:00-15:45 Monday to Thursday, and 09:00-12:00 and 12:30-16:00 on Friday. Streamed content should be available in the Panopto Video tab in Canvas. It is now only available to those registered on this course and logged in. Raw recordings will be published successively in the same place with little delay for those who are at an awkward time-zone offset. Subsequently, edited recordings cutting out periods with no relevant content will replace the raw recordings. The recordings should show a view of the front of the room, beamer content, and sound captured by microphones.
Those who are not on-site will have to use Canvas messages to interact with us; not ideal, but the video system has no two-way option. At the time of writing, nothing has happened to force a reversion to hybrid delivery, so delivery will be on-site plus streaming. Should anyone needing to present on Friday be hindered, we can fall back on a Zoom session for everybody early the following week.
The local weather forecast may be found at https://www.yr.no/en/forecast/daily-table/1-92388/Norway/Vestland/Bergen/Sandviken. Since working in auditorium M may be a bit constricting, we may break out of the schedule if the sun comes out (or even if it doesn’t), and for example walk into town (or part of the way) from mid-afternoon to open up for more informal discussions. For those from outside Bergen, the 7-day public transport ticket is convenient, for example using the Skyss app, see https://www.skyss.no/en for information.
Wifi internet is available throughout the NHH campus. If you already use eduroam, you may have been connected automatically through your home institution. If not, or if your eduroam connection does not work here, please consult https://www.nhh.no/en/about-nhh/it-support/it-support-for-guests/; unfortunately identity confirmation requires a Norwegian phone number, so please assist each other in getting online.
The main reason to indicate here how one may learn autonomously is because things change, and although efforts are made to inform about impending changes and enhancements, users are usually pretty much alone in exploring them. So encouraging confidence in searching for and using information matters.
In RStudio, the Help tab in the lower right pane (default position) gives access to the R manuals and to the installed packages help pages through the Packages link under Reference
In R itself, help pages are available in HTML (browser) and text form; help.start() uses the default browser to display the Manuals, Reference and Miscellaneous Material sections in RStudio’s home help tab
The search engine can be used to locate help pages, but is not great if many packages are installed, as no indices are stored
The help system needs to be learned in order to provide the user with ways of progressing without wasting too much time
The base help system does not tell you how to use R as a system, about packages not installed on your machine, or about R as a community
It does provide information about functions, methods and (some) classes in base R and in contributed packages installed on your machine
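The help system’s full-text search can be tried directly from the console; `help.search()` is base R, and `??` is its shortcut (a quick sketch; the search term is arbitrary):

```r
# Full-text search across the help pages of installed packages;
# the shortcut ??"linear model" does the same
res <- help.search("linear model")
class(res)  # an object of class "hsearch", shown via the help viewer
```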
We’ll cover these first, then go on to look at vignettes, and task views
There are different requirements with regard to help systems - in R, the help pages of base R are expected to be accurate although terse
It has become an increasing problem (especially when package authors use roxygen2 to create help pages) that help pages are more terse than necessary, in which case interested users have to fall back on vignettes or ancillary online documentation
Each help page provides a short description of the functions, methods or classes it covers; some pages cover more than one such
Help pages are grouped by package, so that the browser-based system is not easy to browse if you do not know which package a function belongs to
The usage of the function is shown explicitly, including any defaults for arguments to functions or methods
Each argument is described, showing names and types; in addition details of the description are given, together with the value returned
Rather than starting from the packages hierarchy of help pages, users most often use the help function
The function takes the name of the function about which we need help; the name may be given in quotation marks, and class names contain a hyphen and must be quoted
Instead of using say help(help), we can shorten to the question mark operator: ?help
Occasionally, several packages offer different functions with the same name, and we may be offered a choice; we can disambiguate by putting the package name and two colons before the function name
In the usage section, function arguments are shown by name and order; the args function returns information
In general, if arguments are given by name, the order is arbitrary, but if names are not used at least sometimes, order matters
Some arguments do not have default values and are probably required, although some are guessed if missing
Being explicit about the names of arguments and the values they take is helpful in scripting and reproducible research
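A quick sketch of both points, using base functions (any function would do in place of sd() and quantile()):

```r
# args() prints a function's usage signature without opening the help page
args(sd)  # function (x, na.rm = FALSE)
# Named arguments may be given in any order:
q1 <- quantile(c(1, 5, 9), probs = 0.5, names = FALSE)
q2 <- quantile(c(1, 5, 9), names = FALSE, probs = 0.5)
stopifnot(identical(q1, q2))  # both return the median, 5
```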
The ellipsis ... indicates that the function itself examines objects passed to see what to do
The regular R console does not provide tooltips, that is, pop-ups first offering alternative function or object names as you type, then lists of argument names
RStudio, like many IDEs, does provide this, controlled by Tools -> Global options -> Code -> Completion (by default it is operative)
This may be helpful or not, depending on your style of working; if you find it helpful, fine, if not, you can make it less invasive under Global options
Other IDEs have also provided this facility, which builds directly on the usage sections of help pages of functions in installed packages
Base R has a set of checks and tests that ensure coherence between the code itself and the usage sections in help pages
These mechanisms are used in checking contributed packages before they are released through the archive network (CRAN); the description of arguments on help pages must match the function definition
It is also possible to generate help pages documenting functions automatically, for example using the roxygen2 package
It is important to know that we can rely on this coherence
The objects returned by functions are also documented on help pages, but the coherence of the description with reality is harder to check
This means that use of str or other functions or methods may be helpful when we want to look inside the returned object
The form taken by returned values will often also vary, depending on the arguments given
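As a small sketch, fitting a linear model to a built-in data set and looking inside the returned object:

```r
# Use str() to inspect a returned object, here a fitted "lm" model
fit <- lm(dist ~ speed, data = cars)  # cars: a built-in data set
str(fit, max.level = 1)
names(fit)  # components documented under 'Value' in ?lm
```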
Most help pages address this issue not by writing more about the returned values, but by using the examples section to highlight points of potential importance for the user
Reading the examples section on the help page is often enlightening, but we do not need to copy and paste
The example() function runs those parts of the code in the examples section of a help page that are not tagged \dontrun{} - this can be overridden, but may involve meeting conditions not met on your machine
This code is run nightly on CRAN servers on multiple operating systems and using released, patched and development versions of R, so checking both packages and the three versions of R
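For instance (any help topic with an examples section works):

```r
# Run the examples section of the mean() help page without copy-and-paste
example(mean)
```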
Some examples use data given verbatim, but many use built-in data sets; most packages also provide data sets to use for running examples
This means that the examples and the built-in data sets are a most significant resource for learning how to solve problems with R
Very often, one recognizes classic textbook data sets from the history of applied statistics; contemporary textbook authors often publish collections of data sets as packages on CRAN
The built-in data sets also have help pages, describing their representation as R objects, and their licence and copyright status
These help pages also often include an examples section showing some of the analyses that may be carried out using them
One approach that typically works well when you have a data set of your own, but are unsure how to proceed, is to find a built-in data set that resembles the real one, and play with that first
The built-in data sets are often quite small, and if linked to text books, they are well described there as well as in the help pages
By definition, the built-in data sets do not have to be imported into R, as they are almost always stored as files of R objects
In some cases, these data sets are stored in external file formats, most often to show how to read those formats
The built-in data sets in the base datasets package are in the search path, but data sets in other packages should be loaded using the data() function:
str(Titanic)
## 'table' num [1:4, 1:2, 1:2, 1:2] 0 0 35 0 0 0 17 0 118 154 ...
## - attr(*, "dimnames")=List of 4
## ..$ Class : chr [1:4] "1st" "2nd" "3rd" "Crew"
## ..$ Sex : chr [1:2] "Male" "Female"
## ..$ Age : chr [1:2] "Child" "Adult"
## ..$ Survived: chr [1:2] "No" "Yes"
library(MASS)
data(deaths)
str(deaths)
## Time-Series [1:72] from 1974 to 1980: 3035 2552 2704 2554 2014 ...
At about the time that literate programming arrived in R with Sweave and Stangle - we mostly use knitr now - the idea arose of supplementing package documentation with example workflows
Vignettes are PDF or HTML documents with accompanying runnable R code that describe how to carry out particular sequences of operations
In the RStudio help tab, each package’s index page shows user guides, package vignettes and other documentation
The vignette() function can be used to list vignettes by installed package, and to open the chosen vignette in a PDF reader
A very typical way of using vignettes on a machine with enough screen space is to read the document and run the code from the R file at the same time
Assign the output of vignette to an object; the print method shows the PDF or HTML, the edit method gives direct access to the underlying code for copy and paste
The help system in RStudio provides equivalent access to vignette documents and code
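A small sketch using the grid package, which ships with R (the vignette name is one of several grid provides):

```r
# List vignettes available in an installed package
vignette(package = "grid")
v <- vignette("viewports", package = "grid")
class(v)  # "vignette"; print(v) opens the document, edit(v) the code file
```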
Papers about R contributed packages published in the Journal of Statistical Software and the R Journal are often constructed in this way too
CRAN task views https://cloud.r-project.org/web/views were introduced to try to provide some subject area guidance
They remain terse, and struggle to keep up, but are still worth reviewing
Note that those working in different subject areas often see things rather differently, leading to subject specific treatment of intrinsically similar themes
The R community has become a number of linked communities rather than a coherent and hierarchical whole
As in many open source projects, the R project is more bazaar than cathedral; think of niches in ecosystems with differing local optima in contrast to a master plan
One style is based on mailing lists, in which an issue raised by an original poster is resolved later in that thread
Another style is to use online fora, such as StackOverflow, which you need to visit rather than receiving messages in your inbox
New aggregated blog topics are linked to a Twitter account, so if you want, you too can be bombarded by notifications
Twitter hashtags #rstats and #rspatial are relevant; the R Foundation is also present on fosstodon.org
These are also a potential source of project ideas, especially because some claims should be challenged
R started as a teaching tool for applied statistics, but this community model has been complemented by others
R is now widely used in business, public administration and voluntary organizations for data analysis and visualization
The R Consortium was created in 2015 as a vehicle for companies with relationships to R
R itself remains under the control of the R Foundation, which is still mostly academic in flavour
Visualization is one of the key strengths of R. However, visualization involves many choices, and R offers such a wide range of choices that guidance is desirable. There are two major underlying technologies, base graphics and grid graphics, and several toolboxes built on these (trellis graphics and grammar of graphics on grid graphics), in addition to JavaScript widgets for interaction. In addition, much work has been done on the effective use of shapes and colours https://colorbrewer2.org, https://hclwizard.org/ (and https://www.nature.com/articles/nmeth.1618). See also the most recent AltText movement: https://webaim.org/techniques/alttext/ https://support.microsoft.com/en-us/office/everything-you-need-to-know-to-write-effective-alt-text-df98f884-ca3d-456c-807b-1a1fa82f5dc2
We can distinguish between presentation graphics and analytical graphics. Presentation graphics are intended for others and are completed (even if interactive). Analytical graphics may evolve into presentation graphics, but their main purpose is to visualize the data being analysed (see Antony Unwin’s book http://www.gradaanwr.net/, and Claus Wilke’s https://clauswilke.com/dataviz/). Many of the researchers who have developed approaches to visualization have been involved with Bell Labs, where S came from https://priceonomics.com/how-william-cleveland-turned-data-visualization/.
As installed, R provides two graphics approaches, one known as base graphics, the other trellis or lattice graphics. Most types of visualization are available for both, but lattice graphics were conceived to handle conditioning, for example to generate matching plots for different categories. Many of these were based on the data-ink ratio, favouring graphics with little or no extraneous detail (Edward Tufte) - see Lukasz Piwek’s blog http://motioninsocial.com/tufte/. There are other approaches, such as Leland Wilkinson’s Grammar of Graphics, implemented in Hadley Wickham’s ggplot2 package, which we will also be using here.
So there are presentation and analytical graphics, and there can be a number of approaches to how best to communicate information in either of those modes. R can create excellent presentation graphics, or provide input for graphics designers to improve for print or other media. What we need now are the most relevant simple analytical graphics for the kinds of data we use.
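As a first example of a simple analytical graphic, a base graphics scatterplot of a built-in data set, with a fitted line added for orientation:

```r
# A minimal analytical graphic in base graphics
plot(dist ~ speed, data = cars)
abline(lm(dist ~ speed, data = cars))
```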
Histograms and many other kinds of chart require user choices about the number of bins to be used and bin/class intervals. This may also be termed quantization: the division of part of the real line on which we have measured a variable into intervals. This can also apply to combined categories if they are recoded to reduce the number of alternatives to be displayed. Class intervals are much used in thematic cartography, and I’m the author of the classInt package.
Class intervals can be chosen in many ways, and some have been collected for convenience in the classInt package. The first problem is to assign class boundaries to values in a single dimension, for which many classification techniques may be used, including pretty, quantile, natural breaks among others, or even simple fixed values. From there, the intervals can be used to generate colours from a colour palette, using the very nice colorRampPalette() function. Because there are potentially many alternative class memberships even for a given number of classes, choosing a communicative set matters.
We may choose the number of intervals ourselves arbitrarily or after examination of the data, or use provided functions, such as nclass.Sturges(), nclass.scott() or nclass.FD(). In hist(), nclass.Sturges() is used by default. We can also split on sign(), but handling diverging intervals often involves more work.
The default intervals for bins in hist() are pretty(range(x), n = breaks, min.n = 1), where breaks <- nclass.Sturges(x). The function computes a sequence of about n+1 equally spaced ‘round’ values which cover the range of the values in x. The values are chosen like values of coins or banknotes (1, 2, 5, etc.).
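We can see the bin-count rules and the default break points directly (the simulated data are arbitrary; nclass.Sturges() depends only on the sample size):

```r
set.seed(1)
x <- rnorm(500)
nclass.Sturges(x)  # ceiling(log2(500) + 1) = 10
nclass.scott(x)
nclass.FD(x)
# The break points hist() would use by default for this x:
pretty(range(x), n = nclass.Sturges(x), min.n = 1)
```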
If we use the classIntervals() function from classInt, we can pass through arguments to the function called through style=, and note that n will not necessarily be the number of output classes. By default, intervalClosure= is "left", so [-30, -20) means numbers greater than and equal to (>=) -30 and less than (<) -20; [10, 20] is numbers >= 10 and <= 20.
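A quick sketch with classInt (assumed installed), choosing quantile intervals and mapping them to a colour ramp; the palette end-points are arbitrary:

```r
library(classInt)
set.seed(1)
x <- rnorm(100)
ci <- classIntervals(x, n = 5, style = "quantile")
ci$brks  # six boundaries delimiting five classes
pal <- colorRampPalette(c("wheat1", "red3"))(5)
table(findColours(ci, pal))  # counts of observations per colour
```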
The tmap https://r-tmap.github.io/tmap/ package provides cartographically informed, grammar-of-graphics based functionality, like ggplot2 built on grid graphics. John MacKintosh tried with ggplot2 https://www.johnmackintosh.net/blog/2017-08-22-simply-mapping/, with quite nice results. I suggested he look at tmap, and things got better https://www.johnmackintosh.net/blog/2017-09-01-easy-maps-with-tmap/, because tmap can switch between interactive and static viewing. tmap also provides direct access to classInt class intervals. Like the sf::plot() method, tmap plotting can use classInt internally and accepts a palette (try looking at tmaptools::palette_explorer() for ColorBrewer palettes, or at the rcartocolor package).
The underlying logic of conditioned graphics is that multiple displays (windows, panes) use the same scales and representational elements for comparison. Using the same scales and representational elements for comparison can be done manually, imposing the same scales, colours and shapes in each plot and laying the plots out in a grid. Trellis graphics automated this in S, and lattice provides similar but enhanced facilities in R with a formula interface. ggplot2 and other packages also provide similar functionalities.
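A minimal lattice example of the formula interface for conditioning, using the built-in mtcars data set:

```r
library(lattice)  # a recommended package shipped with R
# One panel per conditioning level, identical scales across panels
p <- xyplot(mpg ~ wt | factor(cyl), data = mtcars)
class(p)  # "trellis"; printing the object draws the plot
```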
In early R, all the graphics functionality was in base; graphics was split out of base in 1.9.0 (2004-04-12), and grDevices in 2.0.0 (2004-10-04). When R starts now, the graphics and grDevices packages are loaded and attached, ready for use. graphics provides the user-level graphical functions and methods, especially the most used plot() methods that many other packages extend. grDevices provides lower-level interfaces to graphics devices, some of which create files and others display windows. The capabilities() function shows what R itself can offer, including non-graphics capabilities, and we can also check the versions of external software used:
capabilities()
## jpeg png tiff tcltk X11 aqua
## TRUE TRUE TRUE TRUE TRUE FALSE
## http/ftp sockets libxml fifo cledit iconv
## TRUE TRUE FALSE TRUE FALSE TRUE
## NLS Rprof profmem cairo ICU long.double
## TRUE TRUE FALSE TRUE TRUE TRUE
## libcurl
## TRUE
grSoftVersion()
## cairo cairoFT pango
## "1.17.6" "" "1.50.9"
## libpng jpeg libtiff
## "1.6.37" "6.2" "LIBTIFF, Version 4.4.0"
Most of base graphics is vector graphics, but some innovations apply both to base and grid. The gridBase package permits base graphics elements, often created as plot() methods in contributed packages, to be placed in grid graphics displays.
We can check the relative standing of graphics and grid from the CRAN package database, and add lattice and ggplot2 (Suggests are typically in examples):
db <- tools::CRAN_package_db()
types <- c("Depends", "Imports", "Suggests")
pkgs <- c("graphics", "grid", "lattice", "ggplot2")
(tbl <- sapply(types, function(type) sapply(pkgs,
function(pkg) length(db[grep(pkg, db[, type]), 1]))))
## Depends Imports Suggests
## graphics 288 2084 90
## grid 80 800 298
## lattice 117 263 267
## ggplot2 370 2691 1205
class(tbl)
## [1] "matrix" "array"
library(lattice)
barchart(tbl, auto.key=TRUE, horizontal=FALSE)
grid and lattice entered as recommended packages in R 1.5.0 in April 2002, and grid became a base package in 1.8.0 in October 2003. Some changes were made in grid for R 3, but its structure remains very stable. The gridBase and gridGraphics packages provide functions for capturing the state of the current device drawn with base graphics tools. One reason for this is the unsolved problem of testing graphics output for identity, to ensure that the same commands for the same data give the same output; for grid objects this is feasible, but not for base graphics on interactive devices. Over and above the use of grid directly, the general-purpose packages lattice and ggplot2 build on grDevices and grid. In addition, it is worth mentioning the vcd (visualizing categorical data) and vcdExtra packages and a recent book on http://ddar.datavis.ca/.
We can combine grobs (grid graphical objects) from different sources
b <- barchart(tbl, auto.key=TRUE,
horizontal=FALSE)
x11()
barplot(t(tbl), legend.text=TRUE,
args.legend=list(x="top", bty="n",
cex=0.8, y.intersp=3))
gridGraphics::grid.echo()
library(grid)
g <- grid.grab()
dev.off()
## png
## 2
grid.newpage()
gridExtra::grid.arrange(g, b, ncol=2)
grid pushes viewports onto a stack, then pops them; see Paul Murrell’s R Graphics book and the grid vignettes
grid.rect(gp = gpar(lty = "dashed"))
vp <- viewport(width = 0.5, height = 0.5)
pushViewport(vp)
grid.rect(gp = gpar(col = "grey"))
grid.text("quarter of the page", y = 0.85)
pushViewport(vp)
grid.rect()
grid.text("quarter of the\nprevious viewport")
popViewport(2)
Just reading the print method for ggplot objects shows how close grid is under ggplot2.
ggplot2:::print.ggplot
## function (x, newpage = is.null(vp), vp = NULL, ...)
## {
## set_last_plot(x)
## if (newpage)
## grid.newpage()
## grDevices::recordGraphics(requireNamespace("ggplot2", quietly = TRUE),
## list(), getNamespace("ggplot2"))
## data <- ggplot_build(x)
## gtable <- ggplot_gtable(data)
## if (is.null(vp)) {
## grid.draw(gtable)
## }
## else {
## if (is.character(vp))
## seekViewport(vp)
## else pushViewport(vp)
## grid.draw(gtable)
## upViewport()
## }
## if (isTRUE(getOption("BrailleR.VI")) && rlang::is_installed("BrailleR")) {
## print(asNamespace("BrailleR")$VI(x))
## }
## invisible(x)
## }
## <bytecode: 0x5b27748>
## <environment: namespace:ggplot2>
There are various lower and higher-level ways of combining graphical output: some are described in https://cran.r-project.org/web/packages/egg/vignettes/Ecosystem.html in the egg package
df <- cbind(expand.grid(pkgs, types), n=c(tbl))
library(ggplot2)
gg <- ggplot(df, aes(x=Var1, y=n, fill=Var2)) + geom_col() +
xlab("") + guides(fill=guide_legend(title="")) +
theme(legend.position="top")
gg2 <- ggplotGrob(gg)
t <- gridExtra::tableGrob(as.data.frame(tbl))
gridExtra::grid.arrange(g, b, gg2, t, ncol=2, nrow=2)
Current useful sources are:
https://geocompr.robinlovelace.net/spatial-class.html#crs-intro https://geocompr.robinlovelace.net/reproj-geo-data.html
https://clauswilke.com/dataviz/geospatial-data.html#projections
https://r-spatial.org/book/02-Spaces.html https://edzer.github.io/sdsr_exercises/02.html
Some presentation projections are not available through the standard GDAL/PROJ ecosystem, but can be found in JavaScript libraries:
https://riatelab.github.io/bertin/ https://observablehq.com/@neocartocnrs/hello-bertin-js https://observablehq.com/collection/@neocartocnrs/bertin https://observablehq.com/@neocartocnrs/bertin-js-projections?collection=@neocartocnrs/bertin https://observablehq.com/collection/@d3/d3-geo-projection
library(sf)
## Linking to GEOS 3.11.0, GDAL 3.6.0, PROJ 9.1.0; sf_use_s2() is TRUE
chicago_tracts <- st_read("chicago_tracts.gpkg")
## Reading layer `chicago_tracts' from data source
## `/home/rsb/und/ecs530/ECS530_h22/chicago_tracts.gpkg' using driver `GPKG'
## Simple feature collection with 2195 features and 28 fields
## Geometry type: MULTIPOLYGON
## Dimension: XY
## Bounding box: xmin: -88.94229 ymin: 40.73651 xmax: -86.92936 ymax: 42.66976
## Geodetic CRS: NAD83
We’ll use the tmap package with class interval styles from the classInt package:
library(tmap)
tm_shape(chicago_tracts) + tm_fill("med_inc_cv", style="fisher", n=7, title="Coefficient of Variation")
In ESRI documentation, CV thresholds of 12 and 40 percent are proposed for the transformed reported MOE values: https://doc.arcgis.com/en/esri-demographics/data/acs.htm. We’ll create a classified variable (ordered factor):
chicago_tracts$mi_cv_esri <- cut(chicago_tracts$med_inc_cv, c(0, 0.12, 0.40, Inf), labels=c("High", "Medium", "Low"), right=TRUE, include.lowest=TRUE, ordered_result=TRUE)
table(chicago_tracts$mi_cv_esri)
##
## High Medium Low
## 1400 766 29
and map it:
tm_shape(chicago_tracts) + tm_fill("mi_cv_esri", title="Reliability")
As the Low reliability tracts are small in size, the "view" mode for interactive mapping may help:
tmap_mode("view")
## tmap mode set to interactive viewing
tm_shape(chicago_tracts) + tm_fill("mi_cv_esri", title="Reliability")
tmap_mode("plot")
## tmap mode set to plotting
Or equivalently using mapview, plotting the CV values rather than the ordered factor, which is not yet well-supported:
library(mapview)
mapviewOptions(fgb = FALSE)
mapview(chicago_tracts[,"med_inc_cv"], layer.name="Coefficient of Variation")
The RColorBrewer package gives, by permission, access to the ColorBrewer palettes accessible from the ColorBrewer website. Note that ColorBrewer limits the number of classes tightly, only 3–9 sequential classes
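For example, drawing five colours from one of the sequential palettes:

```r
library(RColorBrewer)  # assumed installed
brewer.pal(5, "Blues")  # five hex colours from the sequential Blues palette
```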
We can also display all the ColorBrewer palettes:
library(RColorBrewer)
display.brewer.all()
Try exploring alternative class interval definitions and palettes, maybe also visiting http://hclwizard.org/ and its hclwizard() Shiny app, returning a palette generating function on clicking the “Return to R” button:
library(colorspace)
hcl_palettes("sequential (single-hue)", n = 7, plot = TRUE)
pal <- hclwizard()
pal(6)
The “end of rainbow” discussion (on replacing the rainbow palette with perceptually more uniform HCL palettes) is informative:
wheel <- function(col, radius = 1, ...)
pie(rep(1, length(col)), col = col, radius = radius, ...)
opar <- par(mfrow=c(1,2))
wheel(rainbow_hcl(12))
wheel(rainbow(12))
par(opar)
More discussions of useful mapping examples can be found in:
https://clauswilke.com/dataviz/geospatial-data.html
https://r-spatial.org/book/08-Plotting.html https://edzer.github.io/sdsr_exercises/08.html
https://geocompr.robinlovelace.net/adv-map.html
https://walker-data.com/census-r/mapping-census-data-with-r.html
https://rspatial.org/terra/index.html
These are the three talks I’ve already suggested you review for pattern-finding raster/GIS-style, together with the package links:
IALE talk, landscapemetrics https://cran.r-project.org/package=landscapemetrics https://www.youtube.com/watch?v=jtAxJ-S89qI
OGH talk supercells https://cran.r-project.org/package=supercells https://cran.r-project.org/package=regional https://av.tib.eu/media/54880
RGS-IBG talk motif https://cran.r-project.org/package=motif https://www.youtube.com/watch?v=CDgsgcvsg_Y
Over and above these, I suggest SKATER spdep::skater(), also implemented as rgeoda::skater().
https://geodacenter.github.io/documentation.html contains aspatial and spatial clustering approaches. See also https://cran.r-project.org/view=Cluster.
Beyond these, please refer to local measures of spatial association or local spatial autocorrelation, and to other hotspot measures in the spatial epidemiology literature:
https://cran.r-project.org/package=DCluster https://cran.r-project.org/package=DClusterm